
    Augmented Reality for Restoration/Reconstruction of Artefacts with Artistic or Historical Value

    The artistic or historical value of a structure, such as a monument, a mosaic, a painting or, more generally, an artefact, arises from the novelty and the development it represents in a certain field and at a certain time of human activity. The more faithfully the structure preserves its original state, the greater its artistic and historical value. For this reason it is fundamental to preserve its original condition, keeping it as genuine as possible over time. Nevertheless, preservation is not always possible (traumatic events such as wars can occur) and has not always been pursued, whether through negligence, incompetence, or even deliberate unwillingness. As a result, the present condition of a considerable number of such structures ranges from poor to catastrophic. In this context, current technology provides fundamental help for reconstruction and restoration, bringing a structure back to its original historical value and condition. Among modern tools, new possibilities arise from Augmented Reality (AR), which combines virtual reality (VR) settings with real physical materials and instruments. The idea is to perform a virtual reconstruction/restoration before materially acting on the structure itself. This yields several advantages: manpower and machine power are employed only in the final phase of the reconstruction; potential damage or abrasion to parts of the structure is avoided during the cataloguing phase; and the shapes and dimensions of any missing pieces can be defined precisely. The virtual reconstruction/restoration can be further improved by taking advantage of AR, which provides many additional informative parameters that can be essential under specific circumstances. Here we detail the application of AR to the restoration and reconstruction of structures with artistic and/or historical value.

    Interpretable Convolutional Neural Networks for Decoding and Analyzing Neural Time Series Data

    Machine learning is widely adopted to decode multivariate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) have outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research suffers from some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features remain largely unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting in a data-driven way the neural signatures underlying the different decoded brain or behavioral states. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train, and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses these limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which can reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
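
    To make the kind of architecture described above concrete, the following is a minimal sketch of a light, compact CNN for EEG trial classification, assuming a temporal convolution followed by a depthwise spatial convolution over the electrodes; layer sizes, kernel lengths, and the input shape are illustrative assumptions, not the networks developed in the thesis.

    # Minimal sketch of a compact CNN for EEG trial classification (PyTorch).
    # Hyperparameters (8 temporal filters, 65-sample kernels, etc.) are
    # illustrative, not those of the thesis.
    import torch
    import torch.nn as nn

    class CompactEEGNet(nn.Module):
        def __init__(self, n_channels=32, n_samples=512, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                # Temporal convolution: learns band-pass-like filters over time.
                nn.Conv2d(1, 8, kernel_size=(1, 65), padding=(0, 32), bias=False),
                nn.BatchNorm2d(8),
                # Depthwise spatial convolution: learns electrode weightings.
                nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
                nn.BatchNorm2d(16),
                nn.ELU(),
                nn.AvgPool2d((1, 8)),
                nn.Dropout(0.25),
            )
            self.classifier = nn.Linear(16 * (n_samples // 8), n_classes)

        def forward(self, x):  # x: (batch, 1, channels, samples)
            x = self.features(x)
            return self.classifier(x.flatten(start_dim=1))

    model = CompactEEGNet()
    logits = model(torch.randn(4, 1, 32, 512))  # 4 dummy EEG trials

    The learned temporal kernels and spatial (electrode) weights of such a compact model can be inspected directly, which is what makes this style of network attractive as an analysis tool rather than a black-box decoder.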

    Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks

    Despite the well-recognized role of the posterior parietal cortex (PPC) in processing sensory information to guide action, the differential encoding properties of this dynamic processing, as operated by different PPC brain areas, are scarcely known. Within the monkey's PPC, the superior parietal lobule hosts areas V6A, PEc, and PE, included in the dorso-medial visual stream that is specialized in planning and guiding reaching movements. Here, a Convolutional Neural Network (CNN) approach is used to investigate how information is processed in these areas. We trained two macaque monkeys to perform a delayed reaching task towards 9 positions (distributed over 3 depth and 3 direction levels) in the 3D peripersonal space. The activity of single cells was recorded from V6A, PEc, and PE and fed to convolutional neural networks that were designed and trained to exploit the temporal structure of neuronal activation patterns, in order to decode the target positions reached by the monkey. Bayesian Optimization was used to define the main CNN hyper-parameters. In addition to discrete positions in space, we used the same network architecture to decode plausible reaching trajectories. We found that data from the most caudal areas, V6A and PEc, outperformed data from PE in spatial position decoding. In all areas, decoding accuracy started to increase at the time the reach target was instructed to the monkey and reached a plateau at movement onset. The results support a dynamic encoding of the different phases and properties of the reaching movement, differentially distributed over a network of interconnected areas. This study highlights the usefulness of decoding neurons' firing rates via CNNs to improve our understanding of how sensorimotor information is encoded in the PPC to perform reaching movements. The obtained results may have implications for novel neuroprosthetic devices based on the decoding of these rich signals to faithfully carry out patients' intentions.
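
    As an illustration of how single-neuron firing rates can be fed to a CNN that exploits their temporal structure, the sketch below treats each trial as a neurons-by-time-bins matrix, uses the neurons as input channels of 1D convolutions over time, and outputs scores over the 9 reach targets. The neuron count, bin count, and layer sizes are assumptions for the example, and the Bayesian Optimization of hyper-parameters is omitted.

    # Sketch: decoding one of 9 reach targets from binned firing rates with a
    # 1D CNN over time (PyTorch). Shapes and layer sizes are illustrative only.
    import torch
    import torch.nn as nn

    n_neurons, n_bins, n_targets = 100, 40, 9  # e.g. 40 bins of 25 ms each

    decoder = nn.Sequential(
        # Treat neurons as input channels and convolve along the time axis,
        # exploiting the temporal structure of the activation patterns.
        nn.Conv1d(n_neurons, 32, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AvgPool1d(2),
        nn.Conv1d(32, 32, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(32, n_targets),
    )

    rates = torch.randn(8, n_neurons, n_bins)   # 8 dummy trials of firing rates
    logits = decoder(rates)                     # (8, 9) scores over targets
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, n_targets, (8,)))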

    Motor decoding from the posterior parietal cortex using deep neural networks

    Objective. Motor decoding is crucial to translate neural activity for brain-computer interfaces (BCIs) and provides information on how motor states are encoded in the brain. Deep neural networks (DNNs) are emerging as promising neural decoders. Nevertheless, it is still unclear how different DNNs perform in different motor decoding problems and scenarios, and which network could be a good candidate for invasive BCIs. Approach. Fully-connected, convolutional, and recurrent neural networks (FCNNs, CNNs, RNNs) were designed and applied to decode motor states from neurons recorded from area V6A in the posterior parietal cortex (PPC) of macaques. Three motor tasks were considered, involving reaching and reach-to-grasping (the latter under two illumination conditions). DNNs decoded nine reaching endpoints in 3D space or five grip types using a sliding window approach within the trial course. To evaluate the decoders in a broad variety of scenarios, performance was also analyzed while artificially reducing the number of recorded neurons and trials, and while performing transfer learning from one task to another. Finally, the accuracy time course was used to analyze V6A motor encoding. Main results. DNNs outperformed a classic Naive Bayes classifier, and CNNs additionally outperformed XGBoost and Support Vector Machine classifiers across the motor decoding problems. CNNs were the top-performing DNNs when fewer neurons and trials were used, and task-to-task transfer learning improved performance, especially in the low data regime. Lastly, V6A neurons encoded reaching and reach-to-grasping properties already during action planning, with the encoding of grip properties occurring later, closer to movement execution, and appearing weaker in darkness. Significance. Results suggest that CNNs are effective candidates for neural decoders in invasive human BCIs based on PPC recordings, also reducing BCI calibration times via transfer learning, and that a CNN-based data-driven analysis may provide insights into the encoding properties and functional roles of brain regions.
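
    The task-to-task transfer learning mentioned above can be sketched as follows: a decoder trained on one task keeps its convolutional backbone while a new output head is fitted for the other task. The architecture, shapes, and training details are illustrative placeholders, not the networks used in the study.

    # Sketch of task-to-task transfer learning for a neural decoder (PyTorch):
    # pretrain on reaching endpoints (9 classes), then reuse the convolutional
    # backbone and fit a new head for grip types (5 classes). Names, shapes,
    # and training details are illustrative.
    import torch
    import torch.nn as nn

    def make_decoder(n_neurons=100, n_classes=9):
        backbone = nn.Sequential(
            nn.Conv1d(n_neurons, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        head = nn.Linear(32, n_classes)
        return nn.Sequential(backbone, head)

    reach_decoder = make_decoder(n_classes=9)
    # ... train reach_decoder on the reaching task here ...

    # Transfer: copy the learned backbone, swap in a new head for 5 grip types.
    grasp_decoder = make_decoder(n_classes=5)
    grasp_decoder[0].load_state_dict(reach_decoder[0].state_dict())

    # Optionally freeze the backbone and fine-tune only the new head,
    # which is most useful in the low-data regime.
    for p in grasp_decoder[0].parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(grasp_decoder[1].parameters(), lr=1e-3)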

    BCIAUT-P300: A Multi-Session and Multi-Subject Benchmark Dataset on Autism for P300-Based Brain-Computer-Interfaces

    There is a lack of multi-session P300 datasets for Brain-Computer Interfaces (BCI). Publicly available datasets are usually limited by a small number of participants and few BCI sessions. The lack of large, comprehensive datasets with many individuals and multiple sessions has limited advances in the development of more effective data processing and analysis methods for BCI systems. This is particularly evident when exploring the feasibility of deep learning methods, which require large datasets. Here we present the BCIAUT-P300 dataset, containing data from 15 individuals with autism spectrum disorder undergoing 7 sessions of P300-based BCI joint-attention training, for a total of 105 sessions. The dataset was used for the 2019 IFMBE Scientific Challenge organized during MEDICON 2019, where, in two phases, teams from all over the world tried to achieve the best possible object-detection accuracy based on the P300 signals. This paper presents the characteristics of the dataset and the approaches followed by the 9 finalist teams during the competition. The winner obtained an average accuracy of 92.3% with a convolutional neural network based on EEGNet. The dataset is now publicly released and stands as a benchmark for future P300-based BCI algorithms based on multi-session data.
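
    One way to exploit the multi-session structure of such a dataset is a per-subject, leave-one-session-out evaluation over the 7 sessions, sketched below. The loading function and classifier are placeholders, and no assumption is made about the actual file format of BCIAUT-P300 or the official challenge protocol.

    # Sketch of a per-subject, leave-one-session-out evaluation over 7 sessions
    # of a multi-session P300 dataset. Loading and classifier are placeholders.
    import numpy as np

    def load_session(subject, session):
        # Placeholder: should return (epochs, labels) for one recording session,
        # e.g. epochs of shape (n_trials, n_channels, n_samples).
        raise NotImplementedError

    def evaluate_subject(subject, train_fn, test_fn, n_sessions=7):
        accuracies = []
        for held_out in range(1, n_sessions + 1):
            train = [load_session(subject, s)
                     for s in range(1, n_sessions + 1) if s != held_out]
            X_train = np.concatenate([x for x, _ in train])
            y_train = np.concatenate([y for _, y in train])
            model = train_fn(X_train, y_train)   # e.g. fit an EEGNet-like CNN
            X_test, y_test = load_session(subject, held_out)
            accuracies.append(test_fn(model, X_test, y_test))
        return float(np.mean(accuracies))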

    Human, Nature, Dynamism: The Effects of Content and Movement Perception on Brain Activations during the Aesthetic Judgment of Representational Paintings

    Movement perception and its role in aesthetic experience have often been studied, within empirical aesthetics, in relation to the human body. No such specificity has been defined in neuroimaging studies with respect to contents lacking a human form. The aim of this work was to explore, through functional magnetic resonance imaging (fMRI), how perceived movement is processed during the aesthetic judgment of paintings using two types of content: human subjects and scenes of nature. Participants, untutored in the arts, were shown the stimuli and asked to make aesthetic judgments. Additionally, they were instructed to observe the paintings and to rate their perceived movement in separate blocks. Observation highlighted spontaneous processes associated with aesthetic experience, whereas movement judgment outlined activations specifically related to movement processing. The ratings recorded during aesthetic judgment revealed that nature scenes received higher scores than paintings with human content. The imaging data showed similar activation, relative to baseline, for all stimuli in the three tasks, including activation of occipito-temporal areas, posterior parietal, and premotor cortices. Contrast analyses within the aesthetic judgment task showed that human content activated, relative to nature, the precuneus, fusiform gyrus, and posterior temporal areas, whose activation was prominent for dynamic human paintings. In contrast, nature scenes activated, relative to human stimuli, the occipital and posterior parietal cortex/precuneus, involved in visuospatial exploration and pragmatic coding of movement, as well as the central insula. Static nature paintings further activated, relative to dynamic nature stimuli, the central and posterior insula. Besides the insular activation, which was specific to aesthetic judgment, we found a large overlap in the activation pattern characterizing each stimulus dimension (content and dynamism) across the observation, aesthetic judgment, and movement judgment tasks. These findings support the idea that the aesthetic evaluation of artworks depicting both human subjects and nature scenes involves a motor component, and that the associated neural processes occur quite spontaneously in the viewer. Furthermore, considering the functional roles of the posterior and central insula, we suggest that nature paintings may evoke aesthetic processes requiring an additional proprioceptive and sensorimotor component implemented by “motor accessibility” to the represented scenario, which is needed to judge the aesthetic value of the observed painting.